⚠️ DISCLAIMER: INEFFICIENT CODE AHEAD ⚠️

There is a small typo in what's below: in part (b), df/da should be scaled by a $\kappa$ term, but this does not affect the analysis in part (b), and the corresponding changes have been made to the code in the later parts.

stability of fixed point

We can show that the fixed point is stable by plotting a phase portrait: if the derivative has a negative slope at the zero-crossing, then when [$A$] is below $A_\mathrm{st}$ the derivative is positive and [$A$] increases toward $A_\mathrm{st}$, and when [$A$] is above $A_\mathrm{st}$ the derivative is negative and pushes the concentration back down to $A_\mathrm{st}$.
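This sign-check can be done numerically as well as graphically. Below is a minimal sketch (not the notebook's actual code): the parameter values are placeholders, and I'm eliminating the repressor via an assumed steady-state relation $r = \gamma a$, so swap in the real expressions as needed.

```python
import numpy as np
from scipy.optimize import brentq

# Hypothetical n = 1 parameters (placeholders, not the notebook's values):
# alpha = leakage, beta = synthesis, gamma = degradation, kappa = binding.
alpha, beta, gamma, kappa = 0.1, 1.0, 1.0, 1.0

def dA_dt(a):
    """da/dt for n = 1 with leakage, with the repressor eliminated via
    an assumed steady-state relation r = gamma * a."""
    return alpha + beta * kappa * a / (1 + kappa * a + gamma * a) - gamma * a

# Locate the zero-crossing, then check the slope there numerically.
a_st = brentq(dA_dt, 1e-9, 10.0)
eps = 1e-6
slope = (dA_dt(a_st + eps) - dA_dt(a_st - eps)) / (2 * eps)
print(a_st, slope)  # a negative slope at the crossing means stability
```

A negative `slope` is exactly the phase-portrait criterion above: below $a_\mathrm{st}$ the derivative is positive, above it negative.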

On the left, we see the negative slopes which indicate stability. For fun, we can also seed 50 initial random conditions (random in both [Ao] and [Ro]), integrate their trajectories, and watch them converge to the stable points. Nice little horizontal funnels of joy.

Do we need ultrasensitivity? Here, we've shown that with n = 1 and leakage, there is only one stable fixed point, and sustained oscillations cannot occur. Ultrasensitivity would help. However, I am having trouble understanding how ultrasensitivity would manifest for single occupancy. In the very first week, I remember discussions about how an additional cooperative binding term could produce higher-order terms, but for single occupancy, in my mind all the binding sites operate independently of each other, so how would this work?

d) ☝️ ✌️ ( ☝️ + ✌️)

n > 1, here n = 4:

\begin{align} 0 &= \alpha + \beta \cfrac{(\kappa a)^4}{1+(\kappa a)^4+r^4} - \gamma a \\[0.5em] 0 &= \gamma (\kappa^4+\gamma^4) a^5 - [\alpha (\kappa^4+\gamma^4) + \beta \kappa^4] a^4 + \gamma a - \alpha \end{align}
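For the record, the step between these two lines substitutes the repressor nullcline $r = \gamma a$ (my reading of the steady state from part (b); it is consistent term-by-term with the quintic above) and multiplies through by the denominator:

\begin{align} 0 &= \alpha\left[1+(\kappa^4+\gamma^4)a^4\right] + \beta \kappa^4 a^4 - \gamma a\left[1+(\kappa^4+\gamma^4)a^4\right] \end{align}

Negating and collecting powers of $a$ then gives the quintic.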

By Descartes' rule of signs, 3 sign changes means either 1 or 3 fixed points on the positive domain of $a$ (the count of positive roots is the number of sign changes, or less by an even number). So we have at most 3, and if we can exhibit 3, that maximum is achieved. Let's plot this fifth-order polynomial in Desmos (note that all the letters are coefficients, and $a$ is $x$). To find the fixed points, I first chose the last two coefficients ($\gamma$ and $\alpha$), both less than 1 (since the leading two coefficients exponentiate $\gamma$ and scale by $\alpha$), then found leading coefficients that produced 1, 2, or 3 crossings, back-solved for $\kappa$ from the first coefficient, and plugged that into the second coefficient to get $\beta$.

Note that the plotted polynomial is not the derivative and carries no stability information; it is simply the steady-state condition written as a polynomial, and we read off its zeros.
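As a numerical sanity check on the Descartes count, a quick sketch (using the 1-fixed-point parameters from the table below) hands the quintic's coefficients to numpy and counts positive real roots:

```python
import numpy as np

# Parameters from the "1 fixed point" row of the table below.
alpha, beta, gamma, kappa = 0.69, 0.59, 0.79, 1.21

k4, g4 = kappa**4, gamma**4
# gamma*(k4+g4)*a^5 - [alpha*(k4+g4) + beta*k4]*a^4 + gamma*a - alpha = 0
coeffs = [gamma * (k4 + g4), -(alpha * (k4 + g4) + beta * k4), 0, 0, gamma, -alpha]

# Descartes: count sign changes in the coefficient sequence (skipping zeros).
nonzero = [c for c in coeffs if c != 0]
sign_changes = sum(x * y < 0 for x, y in zip(nonzero, nonzero[1:]))

# Count the positive real roots directly.
roots = np.roots(coeffs)
positive_real = [z.real for z in roots if abs(z.imag) < 1e-9 and z.real > 0]
print(sign_changes, len(positive_real))  # 3 sign changes => 1 or 3 positive roots
```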

The parameters are shown in the table below. For each, I will plot the polynomial to visually check the zero points. Then, we will drop a bunch of initial conditions and see how they behave.

| # of fixed points | $\alpha$ | $\beta$ | $\gamma$ | $\kappa$ |
|---|---|---|---|---|
| 1 | 0.69 | 0.59 | 0.79 | 1.21 |
| 2 | 0.12 | 0.51 | 0.49 | 1.65 |
| 3 | 0.12 | 1.26 | 0.79 | 1.21 |

The trajectories take ~5 seconds to run.

We can see the faint traces of the third fixed point in the last plot even though only two are stable. Choosing parameters for 2 fixed points was tough since the polynomial had to just barely touch the zero axis, but we can see points slowly transition from the unstable fixed point to the stable one.

e) 🌳 ➕ ➖ ❓

I have written down the matrix above in part (b) of this problem, and will use that matrix in the retrieve_eigens function. I am using scipy's brentq function to find roots.

$\alpha$ = 5, $\beta$ = 80

$\gamma$ = 10, $\kappa$ = 10
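A minimal sketch of what a function like retrieve_eigens does (my own re-sketch, not the notebook's code): find the fixed point with brentq, then evaluate a numerical Jacobian and its eigenvalues. The repressor equation $dr/dt = \gamma a - r$ here is an assumption, chosen so its nullcline is $r = \gamma a$; swap in the real second equation as needed.

```python
import numpy as np
from scipy.optimize import brentq

alpha, beta, gamma, kappa, n = 5.0, 80.0, 10.0, 10.0, 4

def rhs(a, r):
    """Activator/repressor system; the repressor equation is an assumption."""
    da = alpha + beta * (kappa * a)**n / (1 + (kappa * a)**n + r**n) - gamma * a
    dr = gamma * a - r          # assumed: nullcline r = gamma * a
    return np.array([da, dr])

# On the r-nullcline the fixed point reduces to a 1D root-find in a.
g = lambda a: rhs(a, gamma * a)[0]
a_st = brentq(g, 1e-6, 10.0)
r_st = gamma * a_st

# Central-difference Jacobian at the fixed point.
eps = 1e-7
J = np.column_stack([
    (rhs(a_st + eps, r_st) - rhs(a_st - eps, r_st)) / (2 * eps),
    (rhs(a_st, r_st + eps) - rhs(a_st, r_st - eps)) / (2 * eps),
])
eigs = np.linalg.eigvals(J)
print(a_st, eigs)  # eigenvalues with positive real part would signal instability
```

A complex-conjugate pair of eigenvalues with positive real part at the lone fixed point is the signature of the oscillatory (purple) region.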

observations

Thus the overarching conditions we have are:

1. Comparable degradation and Hill activation coefficients; for a broader range, rapid decay of the activator and tight binding of the repressor.
2. Large synthesis rates relative to leakage; this helps the responsiveness/sensitivity of our system.
3. Leakage! As was hinted during lectures/office hours, leakage can often be a great thing to "jump-start" our system.
4. Ultrasensitivity.

f) 🌀🌊

Let's explore our purple region! I will make a wrapper function that takes in parameter sets and creates a time widget that allows us to track the points over time in the A-R region.

The time slider shows how the trajectories move in the A-R plane (everything is always counter-clockwise)
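A sketch of the wrapper idea (the parameters and repressor equation are my assumptions, and the actual notebook wires this to a widget): integrate once with dense output, then let a slider-callable function slice the solution up to time t.

```python
import numpy as np
from scipy.integrate import solve_ivp

alpha, beta, gamma, kappa, n = 5.0, 80.0, 10.0, 10.0, 4

def rhs(t, y):
    a, r = y
    da = alpha + beta * (kappa * a)**n / (1 + (kappa * a)**n + r**n) - gamma * a
    dr = gamma * a - r   # assumed repressor equation
    return [da, dr]

# One integration with dense output; the slider only re-slices it.
sol = solve_ivp(rhs, (0.0, 5.0), [1.0, 0.0], dense_output=True, max_step=0.01)

def frame(t):
    """Trajectory in the A-R plane up to time t -- this is what an
    ipywidgets time slider would call to redraw the plot."""
    ts = np.linspace(0.0, t, 500)
    a, r = sol.sol(ts)
    return a, r

a, r = frame(2.5)
print(a[-1], r[-1])
```

Integrating once and interpolating keeps the slider responsive, since dragging it never re-runs the ODE solver.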

thoughts

We thus demonstrated various ways to tune our oscillatory system, but found it hard to tune each property completely independently. The amplitude can be driven up by how fast beta can regenerate under permissible conditions; the degradation and sensitivity roughly set the period of our oscillations; and the leakage controls the sharpness/smoothness of the peaks, but also quickens our oscillations.

g) further investigations... 🔍

Is single occupancy required? What about OR logic?

In a separate notebook (not included since this notebook is already really slow), I noticed that for double occupancy, there are oscillations, although they are a bit harder to find (their existence regime is much more sparse). For single occupancy, both AND and OR give us plenty of oscillations.

In JB's office hours, we talked about how negative feedbacks and positive feedbacks could be combined in the circuit we're studying here. To help me think through this, I tried drawing out the different behaviors of single/double occupancy, and how occupancy structures affect the threshold geometries.

I think in this image, the domains of oscillations can be found in the off-diagonals for both AND and OR gates, when A and R are competitively controlling production/repression. (note: the single OR diagram might be off, but it was the best diagram that explained my observations.)

Extension of amplitude-frequency plot

To really drive my analysis home and make this system easier to study in an experimental context, I would plot trajectories across whole parameter regions (basically entire blotches of the purple region), use scipy.signal to decompose each curve and extract its amplitude and frequency, plot each as a data point, and then build visualizations to see how precise I can get.
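The scipy.signal step I have in mind, sketched on a synthetic trace (a plain sine stands in for a real trajectory here):

```python
import numpy as np
from scipy.signal import find_peaks

# Synthetic stand-in for one oscillatory trajectory: 1.5 Hz, amplitude 2.
t = np.linspace(0.0, 10.0, 10001)
x = 2.0 * np.sin(2 * np.pi * 1.5 * t)

# Amplitude: half the peak-to-trough excursion of the steady oscillation.
amplitude = (x.max() - x.min()) / 2

# Frequency: inverse of the mean spacing between successive peaks.
peaks, _ = find_peaks(x)
frequency = 1.0 / np.mean(np.diff(t[peaks]))
print(amplitude, frequency)
```

For real trajectories one would first discard the transient before measuring, so only the limit cycle contributes to the (amplitude, frequency) data point.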

Time delay for the repressilator?

After part (h), I realized all the different behaviors you could get by explicitly adding in a time delay. Though not totally relevant to this circuit, I would like to see how an explicit time delay could tune the repressilator. I also had the thought that you need an odd integer for the repressilator-like circuit to work (must be rock-paper-scissors-lizard-spock and not just rock-paper or rock-paper-scissors-lizard). With more species, I was wondering if there would be any modular number of synchronizations you could achieve.

Overall, this problem was very nice because after the repressilator lecture, I walked away thinking we needed at least three species to generate oscillations, but we were able to do so here with two, and with a very elegant structure too!

h) explicit time delay ⌛

\begin{align} &f(a, r(t-\tau))\\[0.5em] &f(a(t-\tau_a), r(t-\tau_r)) \end{align}

So: the lesson showcasing the DDE function was very cool, but I wasn't sure how to get the package to work with multi-dimensional systems. So I tried writing my own, and I think it works the way it should. The interpolate function does the B-spline fit and returns the delayed concentration at a new time; it is called in trajectory_delay_R, and the key line is the R_delay assignment. I am integrating over a time array of size two again and again.
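For readers who want the gist without digging into the notebook code, here is a stripped-down re-sketch of the same idea: Euler stepping with linear interpolation standing in for the B-spline, and my assumed repressor equation; the parameters, tau, and initial history are placeholders.

```python
import numpy as np

alpha, beta, gamma, kappa, n, tau = 5.0, 80.0, 10.0, 10.0, 4, 0.35
dt, T = 0.001, 2.0

ts, A, R = [0.0], [1.0], [0.0]
for _ in range(int(T / dt)):
    t = ts[-1]
    # Delayed repressor: interpolate the stored history at t - tau;
    # before the history begins, clamp to the initial condition.
    td = t - tau
    r_delay = R[0] if td <= ts[0] else float(np.interp(td, ts, R))

    a, r = A[-1], R[-1]
    da = alpha + beta * (kappa * a)**n / (1 + (kappa * a)**n + r_delay**n) - gamma * a
    dr = gamma * a - r   # assumed repressor equation
    A.append(a + dt * da)
    R.append(r + dt * dr)
    ts.append(t + dt)

A, R = np.array(A), np.array(R)
print(A[-1], R[-1])
```

The only change from an ordinary ODE loop is that the repression term reads the stored history at $t - \tau$ instead of the current value.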

Tuning just $R$

By tuning the magnitude of the delay, we see that the period and the amplitude increase together, while the shape of our curve is unaffected. This was surprising to me since I didn't expect both curves to remain synchronized; I am not sure if something's funky with my code, or if this is just life. On reflection it makes sense, though: when repressor levels are low, both species are activated and kick up again, and we are only delaying R, not A.

Next, we'll tune both A and R.

I like watching the following transition of ($\tau_A$, $\tau_R$): (0.45, 0.35) to (0.50, 0.35) to (1.90, 1.60). I suspect the reader is busy, so I've included the screenshots below.

The transition between the first two is astounding because by merely shifting my time delay in A by 0.05, my frequency becomes ultra-rapid (with minimal loss in amplitude). Although this is not entirely surprising knowing how waves can behave, couple, resonate, etc., it is still a very neat feature. (It makes sense that it is the relative delay time that matters, so varying only one slider while keeping the other fixed can probably capture a healthy range of behavior.)

If I had more time, I would make an amplitude-frequency portrait of the steady oscillations for various $\tau_A$ and $\tau_R$, but for now, I'll include the time traces of A-R and compare them to the above results.

We thus see that delays introduce more complicated, more exciting waveforms; as expected, with greater $\tau$'s the system takes longer to synchronize and relax to its limit cycle. We also see that the phase portrait is less rectangular (without tuning alpha!). Furthermore, the shape of the A-R portrait here matches the single/double occupancy graphic above, where everything oscillates around that slight tilt/cut.

There is more to be analyzed and done, but that's all for this week. Oscillators, I'll see ya later!